Text File | 1993-01-29 | 88KB | 1,536 lines
DEC Appendix
Alpha AXP Systems Comparison Chart
------------------------------------------------------------------------
DEC 3000 Model DEC 3000 Model
400 AXP 500 AXP
Workstation Workstation DEC 4000 AXP
and Model 400S and Model 500S Distributed/
AXP Desktop AXP Deskside Departmental
------------------------------------------------------------------------
# of 1 1 610:1,620:2
Processors 630: 3, 640: 4
650: 5, 660: 6
CPU Clock DECchip DECchip DECchip
Speed 21064 21064 21064
133 MHz 150 MHz 160 MHz
Cache Size 8 KB I-cache, 8 KB I-cache, 8 KB I-cache,
(on chip/on 8 KB D-cache/ 8 KB D-cache/ 8 KB D-cache/
board) 512 KB 512 KB 1 MB per proc.
SPECfp92 111.0 125.1 140.9
SPECrate_fp92 N/A N/A 610: 3317.0
620: 6214.5
SPECint92 65.3 74.3 83.5
SPECrate_int92 N/A N/A 610: 1985.8
620: 3816.1
SPECmark89 107.5 121.5 610: 135.5
SPECthruput89 -- -- 620: 247.0
LINPACK 26.4 30.1 36.3
100x100 (Mflops)
Dhrystone V1.1 129.9 146.7 158.8
(MIPS)
Max. Memory 128 MB/ 256 MB/ 512 MB/
(4/16-Mbit 512 MB* 1 GB* 2 GB*
Chip)
Max. Disk 2.1 GB/9.5 GB 4.2 GB/11.6 GB 16 GB/56 GB
Max. I/O 90 MB/sec. 100 MB/sec. 160 MB/sec.
I/O Support Both systems: 2 Both systems: 2 4 SCSI-2, Fast
SCSI-2, 3-slot SCSI-2, 6-slot SCSI-2*, 4 DSSI
TURBOchannel, TURBOchannel, 6-slot
Futurebus+,
Ethernet, FDDI, Ethernet, FDDI, Ethernet,FDDI*,
ISDN*, ISDN*, Prestoserve*,
Prestoserve*, Prestoserve*, HiPPI*,IPI,VME*
DECram DECram, DECram
Workstation: Workstation:
Voice-Quality Audio Voice-Quality
Audio
OpenVMS         Ethernet*,      Ethernet*, FDDI* Ethernet*, DSSI*,
Clusters        FDDI*                            FDDI*
Workstation HX, TX*, PXG+*, HX, PXG+*, N/A
Graphics PXGT+* PXGT+*
Entry System $14,995 (W/S) $38,995 (W/S) $77,000
Price-U.S. List $18,995 (Server) $41,195 (Server)
ADVANTAGE- $20,720 (Server) $44,395 (Server) $102,500
SERVER Price
Availability Date Now Now Now
------------------------------------------------------------------------
Alpha AXP Systems Comparison Chart (cont.)
DEC 7000 AXP DEC 10000 AXP
Data Center Mainframe
------------------------------------------------------------------------
# of 610:1,620:2 610:1,620:2
Processors 630: 3, 640: 4
650: 5, 660: 6 650: 5, 660: 6
CPU Clock DECchip DECchip
Speed 21064 21064
182 MHz 200 MHz
Cache Size 8 KB I-cache, 8 KB I-cache,
(on chip/on 8 KB D-cache/ 8 KB D-cache/
board) 4 MB per proc. 4 MB per proc.
SPECfp92 178.1 196.5
SPECrate_fp92 610: 4126.0 In progress
640: 15739.4
SPECint92 96.6 106.9
SPECrate_int92 610: 2188.6 In progress
640: 8366.8
SPECmark89 610: 167.4 610: 184.1
SPECthruput89 640: 604.4 640: 654.6
LINPACK 38.6 42.5
100x100
(Mflops)
Dhrystone V1.1 177.3 194.5
(MIPS)
Max. Memory 2 GB/ 2 GB/
(4/16-Mbit 14 GB* 14 GB*
Chip)
Max. Disk 28 GB/284 GB 56 GB/200 GB
(over 10 TB*) (over 10 TB*)
Max. I/O 400 MB/sec. 400 MB/sec.
I/O Support 4 12-slot XMI, 3 4 12-slot XMI, 3
9-slot Futurebus+* 9-slot
10 CI*, 24 DSSI*, Futurebus+*
24 SCSI-2**, 16 10 CI*, 24 DSSI*,
Ethernet, 8 FDDI, 24 SCSI-2**, 16
SDI, Prestoserve*, Ethernet, 8 FDDI,
DECram HiPPI*, IPI*,
VME*, DECram
OpenVMS         Ethernet*, DSSI*,  Ethernet, DSSI*,
Clusters        CI*, FDDI*         CI*, FDDI*
Workstation N/A N/A
Graphics
Entry System $168,000 $316,000
Price-U.S. List
ADVANTAGE- $187,000 $344,000
SERVER Price
Availability Date Now Q1 Calendar 93
------------------------------------------------------------------------
* Available with upcoming operating system release.
** Eight (8) SCSI-2 controllers supported initially.
N/A=Not applicable.
Features may differ between OpenVMS AXP and DEC OSF/1 AXP systems.
------------------------------------------------------------------------
------------------------------------------------------------------------
VAX Systems Comparison Chart
------------------------------------------------------------------------
VAXstation MicroVAX VAX 4000
4000 VLC, 3100 VAX 4000 Models 400,500
System Models 60 Models 30,40, Model 100 and 600
and 90 80, and 90
------------------------------------------------------------------------
Performance VLC:6.2 30 & 40:22 TPS 51 TPS 400:50 TPS
SPECmark
60:12 SPECmark 80:28 TPS 500:68 TPS
90:32.8 90:34 TPS 600:102 TPS
SPECmark
Relative Proc. N/A 30 & 40:5 24 400:16
Performance x 80:10 500:24
VAX-11/7801 90:24 600:32
# of Processors 1 1 1 1
CPU Clock VLC:25 MHz 30 & 40:25 MHz 72 MHz 400:63 MHz
Speed 60:55 MHz 80:50 MHz 500:72 MHz
90:72 MHz 90:72 MHz 600:83 MHz
Cache Size VLC:8 KB on 30 & 40:6 KB 8 KB 400:8 KB on
chip; on chip; on chip, chip,
60 & 90:2 KB 80: 2 KB 128 KB 500 & 600:10 KB
on chip, 256 KB on chip, on chip;
on board 256 KB on 400 & 500:
board; 128 KB
90:10 KB on on board,
chip, 128 KB 600:512 KB on
on board on board
In-Cabinet      60: upgrades    40 upgrades     N/A              Each VAX 4000
Upgrade         to 90           to 80 or 90;                     system upgrades
                                80 upgrades                      to any higher
                                to 90                            VAX 4000 system
Alpha-Ready     60 & 90: system 80 & 90: system System upgrade   System upgrade
System          upgrade to      upgrade to      to Alpha         to Alpha
Upgrade         Alpha desktop   Alpha desktop   system           distributed/
                and deskside    system                           departmental
                workstations                                     system
I/O Features:
Max. Memory VLC:24 MB 30 & 40:32 MB 128 MB 512 MB
Capacity 60:104 MB 80:72 MB
90:128 MB 90:128 MB
Max. Disk VLC:8.4 GB 8.7 GB 28 GB 56 GB
Capacity 60 & 90:8.7 GB
Max. I/O VLC:5.0 MB/s; 4 MB/s 8.5 MB/s 12.5 MB/s
Throughput 60 & 90:5 MB/s
(SCSI), 50.0 MB/s
(TURBOchannel)
I/O Support VLC: 1 SCSI, 3 DSSI 4 DSSI
Synchronous 1 Ethernet (1 embedded, (2 embedded,
SCSI, Ethernet; 2 Q-bus), 2 Q-bus),
1 Q-bus, 2 Q-bus,
60 & 90: 1 SCSI, 3 Ethernet
Synchronous 3 Ethernet
SCSI, Ethernet,
TURBOchannel
High Availability Features:
VAXcluster Ethernet Ethernet Ethernet, Ethernet,
System Support DSSI DSSI
High Avail.     Disk shadowing  Disk shadowing  Disk shadowing,  Disk shadowing,
Features                                        online           online
Supported                                       hardware/        hardware/
                                                sw service       sw service
                                                upgrade          upgrade
Software Features:
System SW       VLC & 60:       OpenVMS         OpenVMS          OpenVMS
                OpenVMS and
                VAXeln;
                90: OpenVMS
Network Appl. NAS 250 NAS 200,300, NAS 200,300, NAS 400
Support SW 400 400
------------------------------------------------------------------------
1 VAX-11/780=1
N/A = not applicable.
------------------------------------------------------------------------
Performance is highly dependent on configuration, application, and
operating environment. Individual workloads should be carefully
evaluated before making performance estimates for specific applications.
In this chart no warranty of system performance is expressed or implied.
------------------------------------------------------------------------
------------------------------------------------------------------------
VAX Systems Comparison Chart
------------------------------------------------------------------------
VAX 6000 VAX 7000 VAX 10000
Models 510 and Models 610 to Models 610 to
System 610 640 640
------------------------------------------------------------------------
Performance 510: 50 TPS 610: 123 TPS 610: 123 TPS
610: 101 TPS Others: N/A Others: N/A
Relative Proc. 510: 13 610: 35 610: 35
Performance x 610: 32 620: up to 65 620: up to 65
VAX-11/7801 630: up to 95 630: up to 95
640: up to 125 640: up to 125
# of Proc. 1 610: 1 610: 1
620: 2 620: 2
630: 3 630: 3
640: 4 640: 4
CPU Clock 510: 63 MHz 91 MHz 91 MHz
Speed 610: 83 MHz
Cache Size 510: 2 KB 10 KB on chip/ 10 KB on chip/
on chip, processor processor
512 KB on 4 MB on board/ 4 MB on board/
board; processor processor
610: 10 KB
on chip,
2 MB on
board
In-Cabinet CPU Each VAX 6000 Each VAX 7000 Each VAX 10000
Upgrade system upgrades system upgrades system upgrades
to any higher to any higher to any higher
VAX 6000 system VAX 7000 system VAX 10000
system
Alpha-Ready System upgrade In-cabinet CPU In-cabinet CPU
System Upgrade to Alpha data upgrade to Alpha upgrade to Alpha
center system data center mainframe-class
system system
I/O Features:
Max. Memory 510: 512 MB 3.5 GB2 3.5 GB2
Capacity 610: 1 GB2
Max. Disk Over 8 TB Over 10 TB Over 10 TB
Capacity
Max. I/O 80 MB/s 400MB/s 400 MB/s
Throughput
I/O Support 1 XMI, 4 CI, 4 XMI, 10 CI, 4 XMI, 10 CI,
12 DSSI, 2 FDDI, 24 DSSI, 8 FDDI, 25 DSSI, 8 FDDI,
6 Ethernet, 16 Ethernet, 16 Ethernet,
5 VAXBI, 2 VME 1 VAXBI3, 2 VME 1 VAXBI, 2 VME
High Availability Features:
VAXcluster Ethernet, DSSI, Ethernet, DSSI, Ethernet, DSSI,
System Support CI, FDDI CI, FDDI CI, FDDI
High Avail. Disk shadowing, Disk shadowing, Disk shadowing,
Features online hardware/ N+1 redundant N+1 redundant
Supported software service power system, power system,
upgrade uninterruptible uninterruptible
power system, power system,
battery backup, battery backup,
online hardware/ mainframe-class
software service service, online
upgrade hardware/software
service upgrade
Software Features:
System Software 510: OpenVMS, OpenVMS OpenVMS
ULTRIX
610: OpenVMS
Network Appl. NAS 200, 300, NAS 200, 300, NAS 400
Support SW 400 400
------------------------------------------------------------------------
1 VAX-11/780 = 1
2 512 MB available now, higher capacities available with upcoming
release of OpenVMS and higher density memory modules.
3 Supported by upcoming release of OpenVMS.
------------------------------------------------------------------------
Performance is highly dependent on configuration, application, and
operating environment. Individual workloads should be carefully
evaluated before making performance estimates for specific applications.
In this chart no warranty of system performance is expressed or implied.
------------------------------------------------------------------------
------------------------------------------------------------------------
DECsystems Comparison Chart
------------------------------------------------------------------------
DECsystem 5000 DECsystem 5000 DECsystem 5000
System Model 25 Model 133 Model 240 DECsystem 5900
------------------------------------------------------------------------
CPU1/FPU R3000A/ R3000A/ R3400B R3000A/
R3010 R3010A R3010A
SPECmark2 19.1 25.5 32.4 32.8
MIPS3 26.7 34.4 42.9 42.9
Clock Speed 25 MHz 33 MHz 40 MHz 40 MHz
Cache Size 64 KB (inst.) 64 KB (inst.) 64 KB (inst.) 64 KB (inst.)
64 KB (data) 128 KB (data) 64 KB (data) 64 KB (data)
Memory Cap. 8 MB-40 MB 8 MB-128 MB 16 MB-480 MB 64 MB-448 MB
(type) (parity) (parity) (ECC) (ECC)
Enclosure Desktop Desktop Desktop Cabinet
Storage Capacity
Internal Up to 426 MB Up to 852 MB None Up to 37.2 GB
Total Up to 25.3 GB Up to 33.5 GB Up to 33.1 GB Up to 37.2 GB
I/O Bus TURBOchannel4 TURBOchannel4 TURBOchannel4 TURBOchannel4
Type VME (opt.) VME (opt.) VME (opt.) VME (opt.)
Peripheral SCSI SCSI SCSI, VME SCSI
Support VME VME CI
Network TCP/IP (std.) TCP/IP (std.) TCP/IP (std.) TCP/IP (std.)
Support NFS (std.) NFS (std.) NFS (std.) NFS (std.)
DECnet-ULTRIX DECnet-ULTRIX DECnet-ULTRIX DECnet-ULTRIX
(opt.) (opt.) (opt.) (opt.)
FDDI (opt.) FDDI (opt.) FDDI (opt.) FDDI (opt.)
------------------------------------------------------------------------
1 From MIPS Computer Systems, Inc.
2 SPECmark is the geometric mean of ten compute-intensive, public-
domain benchmarks compared to performance on a VAX-11/780.
3 Million instructions per second based on the Dhrystone benchmarks.
4 Digital and third-party TURBOchannel options are available. Contact a
sales representative or Digital reseller for third-party options
available through the TRI/ADD program.
------------------------------------------------------------------------
------------------------------------------------------------------------
Entry-level Grayscale System Comparisons
------------------------------------------------------------------------
HP 715/33 DEC 20 DEC 133 DEC 25
------------------------------------------------------------------------
SPECint92 24 13.7 20.1 15.8
SPECfp92 45 14.8 23.5 17.5
SPECmark89 46 16.3 25.5 19.1
MIPS 41 21.6 34.4 26.7
MFLOPS 8.6 2.4 5.9 2.8
(Linpack DP)
Processor PA-RISC R3000A R3000A R3000A
Clock Speed 33 MHz 20 MHz 33 MHz 25 MHz
Cache Size 64 KB/64 KB 64 KB/64 KB 64 KB/128 KB 64 KB/64 KB
(Ins/Data)
X11 Perf 7633 --- --- ---
2D/3D 610 153 298 285
vec/sec (k)
Memory Capacity 8/192 MB 8/40 MB 128 MB 8/40 MB
Internal Disk 2 GB 426 MB 852 MB 426 MB
Capacity
Total Disk 69.7 GB 25.2 GB 33.9 GB 25.2 GB
Capacity
Internal Bus EISA TURBO TURBO TURBO
# of Slots 1 (opt.) 2 3 2
RAM Pricing $100/MB $220/MB $220/MB $220/MB
$175/MB (16 Mbit)
Disk Pricing $4.19/MB $5.28/MB $5.28/MB $5.28/MB
@ 525 MB @ 426 MB @ 426 MB
$4.00/MB $5.20/MB $5.20/MB $5.20/MB
@ 1 GBSE @ 1 GB @ 1 GB
$4.56/MB $4.56/MB $4.56/MB
@ 1.38 GB @ 1.38 GB
Warranty 12 mos 12 mos 12 mos 12 mos
List Price for $5,690 $5,595 $9,995 $7,595
Equivalent 19G/16/Dskls 17G/16/Dskls 19G/16/Dskls 16c/16/Dskls
Configs
Graphics Intr. Intr. HX CX
SPECint92/ 4.2 2.4 2.0 2.1
$1,000
SPECfp92/ 7.9 2.6 2.4 2.3
$1,000
SPECmark89/ 8.1 2.9 2.6 2.5
$1,000
------------------------------------------------------------------------
------------------------------------------------------------------------
Mid-range Desktop System Comparisons
------------------------------------------------------------------------
                                  HP 715/50 and    DEC
                                  725/50           240
------------------------------------------------------------------------
SPECint92 36 27.3
SPECfp92 72 29.9
SPECmark89 69 32.4
MIPS 62 42.9
MFLOPS 13 6
(Linpack DP)
Processor PA-RISC R3000
Clock Rate 50 MHz 40 MHz
Cache Size (Ins/Data) 64 KB/64 KB 64 KB
X11 Perf 11,190 ---
2D/3D vec/sec (k) 920 621/-
Memory Capacity 256 MB 16/480 MB
Internal Disk Capacity 2 GB None
Total Disk Capacity 69.7/239.8 GB 33.1 GB
Internal Bus EISA TURBO
Number of Slots 1/4 3
RAM Pricing $125/MB $220/MB
$175/MB (16 Mbit) $315/MB @ 64 MB
Disk Pricing $4.19/MB @ 525 MB $5.28/MB @ 426 MB
$4.00/MB @ 1 GB/SE $5.20/MB @ 1 GB
$4.56/MB @ 1.38 GB
Warranty 12 mos 12 mos
List Price for $15,590/$20,490 $22,465
Equivalent Configs 19C/32/525 19C/32/426
Graphics CRX HX
SPECint92/$1,000 2.3/1.8 1.2
SPECfp92/$1,000 4.6/3.5 1.3
SPECmark89/ 4.4/3.4 1.4
$1,000
------------------------------------------------------------------------
------------------------------------------------------------------------
High-performance Desktop System Comparisons
------------------------------------------------------------------------
HP 730 HP 735 DEC 240
------------------------------------------------------------------------
SPECint92 51 80 27.3
SPECfp92 85 150 29.9
SPECmark89 86 147 32.4
MIPS 57.9 124 42.9
MFLOPS 23.7 40 6.0
(Linpack DP)
Processor PA-RISC PA-RISC R3000A
Clock Rate 66 MHz 99 MHz 40 MHz
Cache Size 128 KB/256 KB 256 KB/256 KB 64 KB/64 KB
(Ins/Data)
X11 Perf 10,904 19,920 ---
2D/3D 1180 1160 621/-
vec/sec (K)
Memory Capacity 128 MB 400 MB 480 MB
Internal Disk 840 MB 2 GB ---
Capacity
Total Disk 64 GB 126.4 GB 28 GB
Capacity
Internal Bus EISA EISA TUBROchannel
# of Slots 1 1 3
RAM Pricing $125/MB $125/MB $220/MB
$175/MB(16 Mbit) $315/MB @
64 MB
Disk Pricing $4.19/MB @ $4.19/MB @ $5.28/MB @
525 MB 525 MB 426 MB
$4.00/MB @ $5.20/MB @
1 GB/SE 1 GB
$4.70/MB @ $4.56/MB @
1 GB/FW 1.38 GB
Warranty 12 mos 12 mos 12 mos
List Price for $31,400 $37,390 $22,465
Equivalent (19"/32 MB/ (19"/32 MB/ (19"/32 MB/
Configs 424 MB) 525 MB) 426 MB)
Graphics CRX CRX HX
SPECint92/ 1.6 2.0 1.2
$1,000
SPECfp92/ 2.7 3.7 1.3
$1,000
SPECmark89/ 2.7 3.5 1.4
$1,000
------------------------------------------------------------------------
------------------------------------------------------------------------
High-performance Expandable System Comparisons
------------------------------------------------------------------------
HP 750 HP 755 DEC 5900(2)
------------------------------------------------------------------------
SPECint92 51 79 27.3
SPECfp92 85 150 29.9
SPECmark89 86 147 32.8
MIPS 76.7 124 42.9
MFLOPS 23.7 38 6
(Linpack DP)
Processor PA-RISC PA-RISC R3000A
Clock Rate 66 MHz 99 MHz 40 MHz
Cache Size 256 KB/256 KB 256 KB/256 KB 64 KB/64 KB
X11 Perf 10,904 19,120 ---
2D/3D 1,180 1,160 Server
vec/sec (K)
Memory Capacity 64/384 MB 64/768 MB 64/448 MB
Internal Disk 2.6 GB 4 GB 35 GB
Capacity
Total Disk 236 GB 297.5 GB 227 GB
Capacity
Internal Bus EISA EISA TURBOchannel
# of Slots 4 4 3
RAM Pricing $125/MB $125/MB $220/MB
$175/MB
(16 Mbit)
Disk Pricing $3.85/MB @ $3.45/MB @ $5.28/MB @
1.3 GB 2 GB/SE 1 GB
$4.15/MB @ $4.56/MB @
2 GB/FW 1.38 GB
Warranty 12 mos 12 mos 12 mos
List Price for $55,650 $58,990 $49,950
Equivalent (19"/64 MB/ (19"/64 MB/ (19"/64 MB/
Configs 1 GB) 2 GB) 1.3 GB)
Graphics CRX CRX Server
SPECint92/ 0.9 1.2 0.5
$1,000
SPECfp92/ 1.5 2.3 0.6
$1,000
SPECmark89/ 1.5 2.2 0.7
$1,000
------------------------------------------------------------------------
------------------------------------------------------------------------
DECstation Graphics Comparison
------------------------------------------------------------------------
MX HX
------------------------------------------------------------------------
2D KVectors/sec 248 621
2D Area fill,
Mplx/sec 20.3 30.5
3D KVectors/sec NA NA
3D KPolygons/sec NA NA
Resolution 1280x1024 1280x1024
Graphics Planes 1 8
Pixel Stamp proc no no*
i860 accelerator no no
Price $695 $1,995
Z-buffer opt NA NA
------------------------------------------------------------------------
* The HX graphics option includes the new SFB ASIC for graphics, as
opposed to the PixelStamp.
------------------------------------------------------------------------
------------------------------------------------------------------------
TX PXG+ PXGT+
------------------------------------------------------------------------
2D KVectors/sec 252 345 445
2D Area fill,
Mplx/sec 7.7 18.5 12.3
3D KVectors/sec NA 401 436
3D KPolygons/sec NA 70 106
Resolution 1280x1024 1280x1024 1280x1024
Graphics Planes 24 8/24 96
Pixel Stamp proc no yes yes
i860 accelerator no yes yes
Price $3,995 $4,000 $15,000
(8 bit)
$8,000
(24 bit)
Z-buffer opt NA $1,000 Included
for 8 plane,
incl. on 24
plane
------------------------------------------------------------------------
[Figure: NAS 200 Contents, Caption: none]
[Figure: NAS 300 Contents, Caption: none]
[Figure: NAS 400 Contents, Caption: none]
[Figure: NAS 250 Runtime Client Package Contents, Caption: none]
PA7100 versus EV4 Alpha chip comparison
The Alpha architecture
Alpha is a true RISC architecture and a radical change from the previous
CISC design of the VAX series. It has 168 instructions, comparable to
PA-RISC 1.1, MIPS-III, and IBM's POWER. The instructions are all 32 bit
and, like all RISCs, use a simple load/store model to access memory. In
order to simplify superscalar designs, the architecture does not use
branch delay slots (in most RISC designs, the instruction after a branch
is executed while the branch target is being fetched) or condition codes
(like IBM POWER).
The architecture is simplified in many other ways. Alpha does not
support 8-bit or 16-bit loads or stores, address increment on loads and
stores, or decimal arithmetic. PA-RISC provides all of these features,
which are useful for string operations, COBOL programming, and many
other applications. Alpha also has less flexible condition testing and
memory addressing, and has no combination instructions such as PA-RISC's
"compare-and-branch". Stripping down the instruction set provides a
tradeoff: the simpler instructions allow Alpha to reach higher clock
speeds, but in situations requiring these functions, Alpha will have to
do more work to achieve the same result.
Alpha is a full 64-bit architecture, similar to MIPS' R4000. The 32
general registers and 32 floating-point registers are all 64 bits wide.
PA-RISC has the same number of registers, but its integer registers are
only 32 bits wide, although the FP registers are also 64 bits. Alpha's
virtual address space is 64 bits, using a flat (unsegmented) design
similar to the R4000's. The architecture allows a physical memory space
of up to 48 bits (256,000 gigabytes). This is in contrast to PA-RISC's
64-bit segmented virtual address space and 32-bit (4-gigabyte) physical
address space.
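As a sanity check on the quoted capacities, the address-space sizes follow
directly from the bit widths; the "256,000 gigabytes" above is a rounding of
2^48 bytes. A minimal sketch:

```python
# Address-space sizes implied by the address widths quoted above.
def space_gb(address_bits):
    """Bytes addressable with the given number of address bits, in gigabytes."""
    return 2 ** address_bits // 2 ** 30

print(space_gb(48))   # 262144 -- Alpha's max physical space (~"256,000 gigabytes")
print(space_gb(32))   # 4      -- PA-RISC's physical space
print(space_gb(64))   # 17179869184 -- a flat 64-bit virtual space
```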
Note: PA-RISC divides the full virtual space into 4-gigabyte
segments. Multiple objects (such as files or data arrays) of less than
4 GB each can be efficiently handled, but single objects greater than
that size must be partitioned across segments, resulting in a
performance decrease. The flat architectures are equivalent to the
segmented design for most objects, but have a performance advantage for
very large objects. However, very few customers are using objects
larger than 4 GB today.
Instructions for both 32-bit and 64-bit operations are included; there
is no mode bit. Alpha specifies an 8 KB page size, twice that of PA-
RISC 1.1. This larger page size can increase the efficiency of the TLB
but can lead to wasted space when dealing with smaller objects.
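The page-size tradeoff can be made concrete with a little arithmetic; the
9 KB object size below is a made-up example:

```python
PAGE_ALPHA_KB = 8    # Alpha page size
PAGE_PARISC_KB = 4   # PA-RISC 1.1 page size

def allocated_kb(object_kb, page_kb):
    # Space consumed once the object is rounded up to whole pages.
    pages = -(-object_kb // page_kb)   # ceiling division
    return pages * page_kb

# Each TLB entry maps one page, so 8 KB pages let the same number of
# entries cover twice the memory; but a small object wastes more space:
print(allocated_kb(9, PAGE_ALPHA_KB))    # 16 -- 7 KB wasted on Alpha
print(allocated_kb(9, PAGE_PARISC_KB))   # 12 -- 3 KB wasted on PA-RISC
```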
Alpha includes a few features to ease the transition from VAX systems.
Both architectures are little-endian, allowing data files to be easily
exchanged from VAX to Alpha, and also to ACE systems and Intel PCs.
(SPARC, PA-RISC, POWER, and some MIPS systems are big-endian.) The
Alpha chips will also support both standard IEEE floating-point and VAX
floating-point formats. Despite these features, all VAX software must
be recompiled for Alpha to achieve full performance.
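Byte order is easy to demonstrate: a little-endian machine stores the least
significant byte first. A short sketch using Python's struct module shows
the two layouts:

```python
import struct

value = 0x12345678
little = struct.pack('<I', value)   # layout used by VAX, Alpha, ACE, Intel PCs
big    = struct.pack('>I', value)   # layout used by SPARC, PA-RISC, POWER

print(little.hex())   # 78563412 -- least significant byte first
print(big.hex())      # 12345678 -- most significant byte first
```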
To summarize, Alpha is similar to most RISC architectures, but has
been simplified to the extreme. This allows DEC to design superscalar
processors with very high clock rates (see next section), but these
processors will need to execute up to 30 percent more Alpha instructions
to complete the same tasks as a PA-RISC processor. (For a more detailed
analysis, see "Pathlength Reduction Features in the PA-RISC
Architecture" by Ruby Lee, et al, COMPCON 2/92.) As a result, Alpha
chips may have higher native MIPS ratings than PA-RISC chips but achieve
similar benchmark results.
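The MIPS-versus-benchmark point is just arithmetic: if the pathlength is up
to 30 percent longer, equal task throughput requires a correspondingly
higher native instruction rate. The baseline MIPS figure below is
illustrative, not from the article:

```python
def required_mips(baseline_mips, pathlength_ratio):
    # Instructions/second needed to finish the same work in the same time
    # when pathlength_ratio times as many instructions must execute.
    return baseline_mips * pathlength_ratio

# A hypothetical PA-RISC processor rated at 100 native MIPS, vs. an Alpha
# executing 1.3x the instructions (the "up to 30 percent more" above):
print(round(required_mips(100, 1.3), 1))   # 130.0 -- higher MIPS, same result
```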
The Alpha CPU
The Alpha CPU (code-named EV4) is a single chip design including CPU,
floating point, and 8 KB each of instruction and data cache. The chip
is two-way superscalar, able to fetch and execute up to two instructions
per clock cycle. It uses a superpipelined design to achieve frequencies
up to 200 MHz. The chip supports an optional external cache at 1/3 to
1/8 of the internal clock rate. The external cache can be configured
from 128 KB up to 8 MB and requires an additional VLSI chip for the
control logic.
The superscalar design of the EV4 is more flexible than HP's PA7100
chip. The DEC design includes four independent units to handle
load/store, branch, integer math, and floating point math respectively.
Each cycle, the chip will fetch two instructions and can begin executing
both so long as the two are sent to different units, with only a
few exceptions. (Integer store cannot be paired with FP math, nor can
FP store be paired with integer math.) The PA7100 uses a similar
algorithm but has only two independent units, one for integer and the
other for floating point. The flexibility of the EV4 will provide a
significant advantage in integer-only code. The PA7100 does have a
smaller advantage by combining certain integer math and branch
operations into a single instruction, as well as in the floating point
add-and-multiply combination. (Alpha does not include any of these
combination instructions.) Neither chip supports speculative or out-of-
order execution.
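The pairing rule can be sketched as a small lookup. The unit and
instruction labels here are illustrative names, not DEC's mnemonics:

```python
# Toy model of EV4 dual issue: two fetched instructions issue together
# when they target different function units, minus the two excluded
# pairings named above.
EXCLUDED_PAIRS = {
    frozenset(['integer_store', 'fp_math_op']),
    frozenset(['fp_store', 'integer_math_op']),
}

def can_dual_issue(unit_a, kind_a, unit_b, kind_b):
    if unit_a == unit_b:
        return False                                  # same unit: serialize
    if frozenset([kind_a, kind_b]) in EXCLUDED_PAIRS:
        return False                                  # excluded pairing
    return True

print(can_dual_issue('int', 'integer_math_op', 'fp', 'fp_math_op'))       # True
print(can_dual_issue('load_store', 'integer_store', 'fp', 'fp_math_op'))  # False
```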
Figure 1: Pipeline Comparison
------------------------------------------------------------------------
EV4 PA7100
------------------------------------------------------------------------
Cycle time 5 ns 10 ns
Pipeline depth 7 stages 6 stages
Penalties:
Shift 1 cycle 0 cycles
Load-use 2 cycles 1 cycle
Bad branch 3 cycles 1 cycle
Off-chip cache 3 cycles* 0 cycles
FP latency 6 cycles 2 cycles
------------------------------------------------------------------------
* At 1/3 clock ratio; up to 8 if lower clock
------------------------------------------------------------------------
At first glance, the 200 MHz operation of EV4 appears far superior to
the 100 MHz achieved by the PA7100. However, the superpipelined design
results in significantly higher pipeline penalties (see Figure 1). Like
a dragster, the EV4 is fast in the straightaways but doesn't corner very
well. The superscalar design compounds these penalties, since up to two
instructions are stalled for each penalty cycle.
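A rough cycles-per-instruction calculation shows why the penalties matter.
The event frequencies below are assumptions chosen for illustration, not
measurements:

```python
def ns_per_instruction(cycle_ns, base_cpi, penalty_events):
    # penalty_events: list of (stall_cycles, fraction_of_instructions)
    cpi = base_cpi + sum(stall * freq for stall, freq in penalty_events)
    return cpi * cycle_ns

# Both chips can issue two instructions per cycle (ideal CPI 0.5).
# Assume 5% of instructions hit a load-use stall and 5% a bad branch,
# using the Figure 1 penalties:
ev4    = ns_per_instruction(5,  0.5, [(2, 0.05), (3, 0.05)])
pa7100 = ns_per_instruction(10, 0.5, [(1, 0.05), (1, 0.05)])
print(round(ev4, 2), round(pa7100, 2))   # 3.75 vs 6.0 -- the clock edge shrinks
```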
To help mask the three cycle penalty for bad branch prediction, the
EV4 uses a combination of static and dynamic branch prediction. Some
Alpha instructions contain branch hints, and subroutine calls place the
return address on a four-entry return stack. Dynamic branch prediction
is done using a 2048-entry branch history table which works on the
theory that a branch will tend to go in the same direction as it did the
previous time. The PA7100 does only static branch prediction, achieving
a 70 percent--80 percent success rate. The DEC approach should reach
over 90 percent, but the larger branch penalty will produce a net
negative comparison.
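The "net negative" claim follows from expected-cost arithmetic: the average
stall per branch is (1 - prediction accuracy) x misprediction penalty:

```python
def expected_branch_stall(accuracy, penalty_cycles):
    # Average cycles lost per branch to mispredictions.
    return (1 - accuracy) * penalty_cycles

# EV4: ~90% dynamic accuracy but a 3-cycle penalty;
# PA7100: 70-80% static accuracy with a 1-cycle penalty.
print(round(expected_branch_stall(0.90, 3), 2))   # 0.3  cycles per branch
print(round(expected_branch_stall(0.75, 1), 2))   # 0.25 cycles per branch
```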
The small (8 KB), direct-mapped caches on the EV4 chip will produce a
high miss rate on many applications (particularly commercial or
multiuser), meaning that the CPU will spend much of its time refilling
the on-chip cache. This problem is exacerbated by the superscalar
design; at two instructions per cycle, a cache miss could be generated
every few cycles. The PA7100 can access its external cache with no
penalty, and typically will have much larger caches than the EV4's on-
chip cache, reducing the miss rate to a few percent. The DEC chip uses
"hit under miss" and "critical word first" algorithms to reduce the on-
chip cache miss penalty in some cases. (The PA7100 also uses these
algorithms.) A four-entry write buffer is used to hide delays caused by
stores to the external cache.
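A direct-mapped cache holds exactly one line per index, so two hot
addresses exactly one cache size apart evict each other on every access.
The 32-byte line size below is an assumption for illustration:

```python
CACHE_BYTES = 8 * 1024    # EV4 on-chip D-cache size
LINE_BYTES  = 32          # assumed line size, for illustration
NUM_LINES   = CACHE_BYTES // LINE_BYTES

def line_index(addr):
    # Direct-mapped placement: each address has one fixed slot.
    return (addr // LINE_BYTES) % NUM_LINES

a = 0x40000
b = a + CACHE_BYTES       # exactly one cache size apart
print(NUM_LINES)                       # 256
print(line_index(a) == line_index(b))  # True -- the two addresses thrash
```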
Another disadvantage of the EV4's on-chip cache is that it increased
the chip's die size by 15 percent, resulting in higher cost. The EV4 is
designed to interface to either ECL or CMOS SRAM; since the chip
requires 8 ns SRAM for secondary cache at 200 MHz (66 MHz external),
expensive ECL SRAM may be required to reach that speed. Standard CMOS
10--12 ns SRAMs are used at lower clock frequencies. The PA7100 achieves
100 MHz cache accesses using standard 9 ns CMOS SRAM by using a
sophisticated circuit design which begins the next access while the
current access is still being completed. This overlapped access
technique allows higher cache access rates with a less expensive CMOS
cache.
EV4 implements a 43-bit (8,192 GB) virtual address space and a 34-bit
(16 GB) physical address space as a subset of the full architecture.
The TLB is on-chip and contains 16 instruction entries and 32 data
entries. Four of the ITLB entries map 4 MB blocks. The DTLB entries
are all configurable to map anywhere from 8 KB (one page) to 4 MB (512
pages) each. This TLB is much smaller than the PA7100's 136-entry
unified TLB, but only 16 entries in the HP design are configurable to
map large blocks of memory (512 KB to 64 MB). Unlike the PA7100, the
EV4 does not implement a hardware TLB update algorithm.
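Maximum TLB reach (entries x largest block each can map) makes the size gap
concrete, using the figures quoted above:

```python
def tlb_reach_mb(entries, block_mb):
    # Largest memory footprint the entries can map without a TLB miss.
    return entries * block_mb

print(tlb_reach_mb(32, 4))    # 128  -- EV4: 32 data entries, up to 4 MB each
print(tlb_reach_mb(16, 64))   # 1024 -- PA7100: 16 block entries, up to 64 MB each
```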
The Alpha chip is implemented using 1.68 million transistors. The
on-chip cache consumes about 900,000 of those using 6 transistors per
bit. This includes some redundant rows, which can be used to replace
failed cells. DEC may use this redundancy to mask field failures or to
improve the fabrication yield. The logic functions on the chip consume
the remaining 780,000 transistors, slightly less than the 850,000 used
in PA7100.
Figure 2: IC Process Comparison
------------------------------------------------------------------------
HP DEC
CMOS-26B CMOS-4
------------------------------------------------------------------------
Gate (drawn) 0.75 micron 0.75 micron
Gate (effective) 0.61 micron 0.50 micron
Gate thickness 160 Ang 105 Ang
Metal 1 2.6 microns 2.25 microns
Metal 2 2.6 microns 2.65 microns
Metal 3 6.0 microns 7.50 microns
Power supply 5.0 V 3.3 V
------------------------------------------------------------------------
The EV4 uses DEC's 0.75 micron CMOS-4 process. This process is about a
half-step ahead of HP's CMOS-26B process (see Figure 2), which is used
for the PA7100. The effective gate length and gate thickness are
significantly better than 26B, but the metal layers are about the same.
Since DEC is already using CMOS-4 for the current NVAX CPU (VAX 6600
series), there is little manufacturing risk.
Figure 3: Chip Cost Comparison
------------------------------------------------------------------------
EV4 PA7100
------------------------------------------------------------------------
Die width 16.8 mm 14.0 mm
Die length 13.9 mm 14.0 mm
Package type 431 pin 504 pin
Cost ratio $1.43 $1.00 (estimated)
------------------------------------------------------------------------
The chip area is about 20 percent larger than the PA7100; combined with
an estimated 10 percent higher wafer cost, this indicates a 43 percent
higher chip cost (see Figure 3). DEC has announced that the EV4 will be
sold as the 21064-AA for $1,557 in volume for the 150 MHz version; most
RISC CPUs are sold for well under $1,000. (The PA7100 is not sold on
the open market.) The EV4 also has a high power dissipation of 30 Watts
at 200 MHz, despite using reduced (3.3V) power levels; this is about 50
percent more power usage than a 100 MHz PA7100. DEC will "bin" to reach
its 200 MHz goal, since only a small percentage of chips work at that
speed. The slower chips will be used in a lower-performance system at
150 MHz.
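The jump from 20 percent more area to 43 percent more cost comes from
yield: larger dies catch more defects. A simple exponential yield model
reproduces the ratio; the defect density here is an illustrative assumption
tuned to match Figure 3, not a published figure:

```python
import math

def relative_die_cost(wafer_cost, die_area_mm2, defects_per_cm2):
    # Cost per good die ~ (wafer cost x die area) / yield, with a simple
    # Poisson yield model: yield = exp(-defect_density x die_area).
    area_cm2 = die_area_mm2 / 100.0
    die_yield = math.exp(-defects_per_cm2 * area_cm2)
    return wafer_cost * die_area_mm2 / die_yield

ev4    = relative_die_cost(1.10, 16.8 * 13.9, 0.25)  # ~10% costlier wafer, 233.5 mm^2
pa7100 = relative_die_cost(1.00, 14.0 * 14.0, 0.25)  # 196.0 mm^2
print(round(ev4 / pa7100, 2))   # 1.44 -- close to the $1.43 : $1.00 ratio
```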
Although DEC has not announced the SPECmark performance for the EV4,
several articles have reported 140-150 SPECmarks, which would be about
10 percent-15 percent higher than the PA7100 at 100 MHz. The chip will
have better integer SPECmark performance than the PA7100 due to its more
flexible superscalar design and branch prediction, although the
penalties to access the off-chip cache may hamper EV4 in large
applications such as OLTP. Floating-point SPECmark performance will be
less than HP's because of the long FP latencies, but vector performance
could be higher if the on-chip cache is used exclusively.
VAXcluster configuration overview
A VAXcluster system is a single, large system made up of several VAX
processors running the VMS operating system, and configured with
globally shared, mass storage subsystems. DEC developed VAXcluster
systems in order to allow the sharing of data, disks, printers, and the
computers themselves among users, across multiple CPUs.
VAXcluster configurations
VAXcluster systems are configured with the following groups of
components:
o CPUs -- VAX processors, from desktop systems to the VAX 9000 mainframe
system, can be members of a VAXcluster system.
o Interconnects -- There are four types of interconnects that are
currently used in VAXcluster configurations:
- Ethernet
- DSSI
- CI
- FDDI
o Storage Subsystems
VAXcluster configurations
o Ethernet-based VAXcluster systems: Ethernet can be used to connect
VAX systems and VAXstations in an Ethernet-based, or local-area
VAXcluster. An Ethernet-based VAXcluster configuration can be
connected to CI or DSSI configurations to create larger mixed-
interconnect VAXcluster systems.
[Figure: Ethernet-Based VAXcluster System, Caption: none]
o DSSI VAXcluster systems: The Digital Storage Systems Interconnect
(DSSI) can be used to connect up to three VAX systems in a DSSI
VAXcluster. This three-system DSSI VAXcluster allows all three
systems to share disks across the DSSI. Data and applications remain
available to all users in the VAXcluster, whether timesharing or server,
even
if two of the three systems go down. Systems in a DSSI VAXcluster can
include three VAX 6000 systems, three VAX 4000 Model 300 or 500
systems, any combination of those, or one VAX 4000 Model 300 or 500
and any two Q-bus systems. Some MicroVAX and VAX 4000 systems can
include multiple DSSI buses in VAXcluster configurations.
[Figure: DSSI VAXcluster System, Caption: none]
o CI VAXcluster Systems: The Computer Interconnect (CI) can be used to
connect up to 32 multiuser VAX VMS systems and terabytes of data
storage in a CI VAXcluster. CI VAXclusters can also be configured with
up to 96 systems, including VAXstations.
[Figure: CI VAXcluster System, Caption: none]
Note: STAR stands for "Star Coupler". The Star Coupler is the fully
redundant common connection point for all VAXcluster nodes connected to
the CI. It connects together all CI cables from the individual nodes
into a radial, or star, arrangement that has a maximum radius of 45 m.
The Star Coupler can be configured to support VAXcluster systems of up
to 32 CI interfaces (90 meters maximum between nodes).
HSC stands for "Hierarchical Storage Controller". It is a self-
contained, intelligent, mass storage subsystem that connects one or more
host processors to a set of mass storage disks and/or tapes.
o FDDI VAXcluster System: The VAXcluster Multi-Datacenter Facility
extends VAXcluster systems across multiple geographic locations, and
serves as a platform for disaster tolerance. If a problem occurs at
one site, the other site or sites can continue to process critical
applications. FDDI's bandwidth (100 Mbit/second) and allowable
distance between systems (up to 40 Km) make it especially important as
the interconnect in a disaster-tolerant VAXcluster system.
[Figure: FDDI VAXcluster System, Caption: none]
Note: The single point of failure for these VAXcluster configurations
is the "star coupler," which is the interface between the VAXs being
clustered and the HSCs interfaced with the disks.
Also, keep in mind that the degree of system availability of the
cluster is determined by the level of hardware redundancy in the system,
that is, the HSC redundancy.
VAXcluster benefits common to all these configurations:
o Single-point system management across multiple systems or across
multiple geographically dispersed sites.
o Expandability of bandwidth, number of systems in the VAXcluster,
storage devices, distance between systems or sites with no application
rewrites.
o Disaster tolerance for business-site protection.
o Shared mass storage. All CPUs in a VAXcluster system share all
storage devices whether on an HSC subsystem or a CPU.
o Shared batch and print resources with the VMS job controller providing
load balancing across all VAXcluster system CPUs.
o Shared access to all disks in the cluster, providing access to all
applications and utilities in the VAXcluster system.
o Cluster system-wide standard VMS system and security features.
Note: If a CPU in the cluster goes down, users logged onto that system
must relog onto another CPU in the cluster, which will have access to the
shared storage containing the application and the data the users were
using at the time of CPU failure.
VAXcluster disadvantages:
VAXclusters are very expensive. One needs to consider that not only are
there extra CPUs to buy, but also other redundant components to consider
that will eliminate single points of failure, as the typical
configuration examples above illustrate.
In addition to acquiring the additional hardware components, there are
several pieces of additional software required to maintain and manage
the VAXcluster. These software pieces include:
o A distributed file system that allows all VMS processors in a
VAXcluster system to share disk mass storage to the record level.
o A distributed lock manager: The VMS Distributed Lock Manager is a
tool for synchronizing access to resources for processes. The
resources can reside on a single CPU or in a VAXcluster system. The
Lock Manager provides a namespace in which processes can lock and
unlock resource names.
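The lock-and-unlock namespace idea can be sketched in a few lines of Python. This is a toy analogue for illustration only, not the actual VMS lock services interface; the class and method names are hypothetical:

```python
import threading

class LockManager:
    """Toy analogue of a cluster lock manager: a namespace of
    resource names that processes can lock and unlock."""
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}                   # resource name -> owner id

    def enqueue(self, resource, owner):
        """Try to lock a resource name; return True on success."""
        with self._guard:
            if resource in self._locks:
                return False               # already held by another process
            self._locks[resource] = owner
            return True

    def dequeue(self, resource, owner):
        """Release a lock held by this owner."""
        with self._guard:
            if self._locks.get(resource) == owner:
                del self._locks[resource]

mgr = LockManager()
assert mgr.enqueue("ORDERS.DAT", "node_a")      # node A gets the lock
assert not mgr.enqueue("ORDERS.DAT", "node_b")  # node B must wait
mgr.dequeue("ORDERS.DAT", "node_a")
assert mgr.enqueue("ORDERS.DAT", "node_b")      # now node B can lock it
```

In the real product, the namespace is distributed across every node in the VAXcluster rather than held in one process, which is where the management overhead discussed below comes from.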
o The MSCP Server: The MSCP (Mass Storage Control Protocol) is a
protocol for logical access to disks and tapes. The VMS MSCP Server
implements the disk-only portion of this protocol. It permits any VAX
processor in the VAXcluster system to access disks that are connected
locally to another VAX processor VAXcluster node.
System administrators need to be trained to use this software, which
several customers complain is very complex to use. In addition, there
is a lot of system overhead required to run these tools to manage the
VAXcluster.
Considering cost, it is easy to see that even in a minimal two-system
configuration, the solution price is more than doubled in hardware,
software, and system administration costs.
Finally, keep in mind that these VAXclustering products are not based
on any standards and are proprietary products that DEC uses to lock
customers into VAX solutions.
NetBase clustering solutions for the HP 3000
NetBase is a comprehensive software solution consisting of five major
services (that can be used alone or together). Provided by QUEST in
Newport Beach, Calif., NetBase offers functionality that can benefit
high-end customers with disaster-recovery needs as well as customers
with the need for loosely coupled systems. This provides a unique
capability for the HP 3000. In fact, NetBase is the cornerstone of an
ongoing direction that HP will continue to develop that will strengthen
HP's ability to provide loosely coupled systems. We will work with
QUEST to tie more of their technology to our technology over time.
The five services that make up NetBase are listed below. Each service
is discussed in more detail later in this document. In addition, if you
have questions about NetBase you may call QUEST directly at (714) 720-
1434 (and ask for technical support).
1. NetBase Network File Access (also known as Central File Directory) -
Gives applications transparent access to data and programs on other HP
3000 systems.
2. NetBase Spooling - A network spooling product for HP 3000 systems
that controls and distributes all spooling activity.
3. NetBase Shadowing - Automatically maintains copies of data throughout
a network. This feature is key because it helps deliver disaster
tolerance.
4. NetBase Statistics - A performance tool that captures data at a file
and user level.
5. NetBase AutoRPM (Remote Process Management) - Gives users transparent
access to programs located on remote machines.
When NetBase can help you close a deal
NetBase is a unique capability for the HP 3000 and you should learn to
recognize when it is appropriate to bring QUEST in on your sales cycle.
When you think it may be appropriate, call QUEST at (714) 720-1434.
For Installed-Base Customers:
High availability and disaster tolerance are required: This is one of
NetBase's most significant benefits. Using shadowed data on
geographically dispersed systems, customers are protected in the event
of a natural disaster in one location. If System-A fails, they can
relog on to System-B (which resides elsewhere) and have access to the
redundant applications and data. This is a differentiator vs. DEC (see
the information on DEC VAXclusters).
Multiple CPUs exist on a network with NS performance problems:
NetBase can actually improve network performance because it is very
efficient about packing the data together to minimize the network
traffic.
Batch and online jobs are competing for resources: By moving some
applications to other systems, or by letting batch jobs run against
shadowed data on another system, you can use NetBase to reduce resource
contention. For example, you can create specialized servers for batch
reporting, database engines, or print servers. This can be an effective
way to introduce sales for additional hardware.
Single CPU is overloaded: NetBase can take part or all of an
application and move it to another system, improving performance on the
first system. No application changes are necessary. This customer
scenario is the most difficult to qualify. In general, it's necessary
for the application to be modular or for the customer to have multiple
applications on the system in order for NetBase to be an effective
answer. If the system is running just one large application against one
large database, NetBase may not be able to help. When in doubt, give
them a call. There are some situations they can quickly assess, and
others they will have to investigate.
New business deals:
Mainframe offloading: The qualification process here is the same as in
the point above (single CPU overload). If the applications on the
mainframe are modular (or are being replaced with a modular
application), NetBase can enable the customer to distribute the programs
and data where they make the most sense geographically. Yet to users,
it will still look like one large system.
Mainframe replacements: Same as above. Disaster tolerance is
required: Same as in the high-availability point above. The key thing
to remember here is that NetBase is a cost-effective disaster-tolerance
solution, and is better than what DEC clusters and the IBM AS/400 can
offer.
Positioning NetBase with SPU Switchover/iX and Mirrored Disk/iX
SPU Switchover/iX is an effective solution for people who want one
system to back up another system on that same site in the event of
failure. After a failure, System-B takes over System-A's HP-FL disks.
When System-A is up again, users simply relog onto System-A.
NetBase, on the other hand, is an effective solution for people who
want one system to back up a system on another site, over a WAN. In the
event of a failure, users log on to System-B and use the shadowed data.
Thus it is the shadowed data that is being updated now, not the original
data on System-A.
With both SPU Switchover/iX and NetBase, the operator must initiate
the switchover process.
Mirrored Disk/iX is similar in that it is suitable for data-
availability protection within one site. However, for data-availability
protection across the network, NetBase is a better solution.
How NetBase compares to DEC VAXclusters
NetBase is very different from a DEC VAXcluster, although it provides
some of the same benefits.
Customers who use clusters have typically identified three main benefits
they wanted that led them to choose a cluster.
1. High-end performance growth. DEC was lacking a high-end
multiprocessing system in the '80s, and VAXclusters were their way of
providing more performance. So DEC will try to position a VAXcluster
that consists of 8 machines as being one system with the processing
power of all those machines combined. Since DEC came out with its
multiprocessing systems, they have not bid clusters as aggressively.
In reality, VAXclusters don't offer linear performance improvements with
each system that is added on. Neither does NetBase. HP provides
performance growth through its cost-effective, high performing PA-RISC
systems. DEC still has to wait for its Alpha project to have RISC
commercial systems.
Still, don't position NetBase against a cluster when the customer
needs performance growth for a single (nonmodular) application. Bid HP
3000 PA-RISC systems. We estimate that clusters have an edge over
NetBase with regard to overall performance because all the systems
logically share the same disk farm, and are connected with high-
performing cables. In fact, the disk controller for the disk farm is
actually a PDP-11 computer. NetBase, while it provides distributed data
sharing, must go through an HP 3000 to get to the disk. It uses
standard networking, and is slightly slower as a result. It is probably
more cost effective than a cluster, but probably doesn't have the same
performance.
2. Loosely coupled data sharing and load balancing. Both clusters and
NetBase offer this benefit, with different implementations. Clusters
enable systems to access the same set of disks, through star couplers
and redundant disk controllers. These components have a distance
restriction of 5--10 meters. So true distributed data sharing across
a WAN is not possible.
This diagram illustrates the concept of shared disk farms:
[Figure: Shared Disk Farm Configuration, Caption: none]
NetBase enables users to transparently share the data that is spread
across multiple HP 3000 systems on a network. It enables a customer to
fit the systems to the business, rather than fitting the business to the
systems.
This diagram illustrates the concept of users being able to
transparently access databases anywhere on the network, which is how
NetBase works. Notice how this differs from cluster's shared disk
farms:
[Figure: NetBase Transparent Database Access, Caption: none]
Clusters will load balance several ways. When a cluster gets a batch
job, it will look for the least-busy system and assign the batch job to
it. Interactive session load balancing is possible if the cluster has a
front-end terminal processor or front-end system (a more expensive
cluster configuration).
NetBase provides load balancing by enabling customers to predefine
where different types of jobs and applications will run. For example,
the local system could be reserved for heavy OLTP use during the day,
while daytime batch jobs could be spooled out to run remotely on another
system. During a year-end crunch, the system operator could "borrow"
another system's resources for a week or two to offload the first
system. NetBase's load balancing is flexible, because it works through
a central directory that does not require the applications themselves to
be modified.
Neither NetBase nor clusters provide truly dynamic load balancing.
3. High availability. DEC pushes this benefit heavily, stating that
clusters have "no single point of failure". And it's true that
clusters can be configured so that every component is redundant.
Sometimes the customer will put a fault-tolerant VAX between the users
and the HSC controller.
Clusters do a good job at providing high availability, but they are
expensive. And they CANNOT provide this benefit over a WAN. Thus, they
cannot position the cluster as a true disaster-tolerant solution. If
the cluster in Kansas is hit with a tornado, the customer is at risk.
Gartner Group has written a fact sheet stating that they consider
clusters that are linked over an FDDI (high speed) network to be a
disaster-tolerant solution. But be aware that the FDDI has a distance
limit of about 50 km. If those two clusters had been in northern
California during the '89 earthquake, they would have BOTH been down.
NetBase, on the other hand, provides redundancy over a WAN through
data shadowing. So when the HP 3000 in Kansas is hit with a tornado,
the users can relog on to the system with the shadowed data in San
Francisco. Then if the system in San Francisco is hit by an earthquake,
the users can relog on to the system in Los Angeles with ANOTHER
shadowed copy of the data. And if the system in LA goes down due to
riots, they can log on to yet ANOTHER system in Florida with yet ANOTHER
copy of shadowed data. There is no distance limit. NetBase is the
solution for GLOBAL disaster tolerance.
Don't underestimate the value of this feature. One of NetBase's
customers, Discount Corp. of New York, is a trading firm in NY. All of
NYC suffered from an electrical failure one day, and all of the trading
firms were frozen. But Discount Corp. of New York had been shadowing
data to New Jersey all along. Since their users were completely without
power in NY, they literally put their traders on a bus and rushed them
to the NJ location. Soon, they were trading again and they were the
ONLY trading firm that was able to continue working that day. That
ability resulted in a lot of incremental revenue for them.
NetBase Network File Access
NetBase Network File Access (NFA) is also known around HP as Central
File Directory. It is high-performance networking software that gives
applications transparent access to data and programs on other HP 3000
systems. NetBase NFA offers increased networking performance and
provides some level of horizontal system growth. Spreading applications
across multiple machines isolates application usage, increases
throughput, and offers maximum flexibility in upgrade paths. If
additional processing power is needed, adding another system becomes
simple.
Using NetBase NFA, an application designed to run on a single computer
can easily be distributed across multiple computers without having to
change a single line of application code. NetBase maintains a
centralized directory of all files and databases that are available to
network users. Once entered into the directory, all requests to that
file or database are directed to the appropriate machine.
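A central file directory of this sort boils down to a name-to-host lookup. A minimal Python sketch follows; the file names, host names, and function are hypothetical and do not reflect the actual NetBase interface:

```python
# Hypothetical sketch of a central file directory: once a file is
# registered, any request for it is routed to the machine that holds it.
directory = {
    "ORDERS.DB": "sysb",       # database lives on system B
    "PAYROLL.DB": "sysa",      # database lives on system A
}

def open_file(name, local_host="sysa"):
    """Route a file request to the host registered in the directory,
    falling back to the local system for unregistered files."""
    host = directory.get(name, local_host)
    return (host, name)

assert open_file("ORDERS.DB") == ("sysb", "ORDERS.DB")
assert open_file("SCRATCH.TMP") == ("sysa", "SCRATCH.TMP")
```

Because the lookup happens below the application, programs keep opening files by name and never see which machine actually serves the data.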
Features of NFA: Benefits of NFA:
------------------------------------------------------------------------
Remote file access without          Low use of system resources
remote session or NS/3000           Simple to implement
Dynamic self-tuning for best        Immediately improves
performance                         performance for users of
No application changes needed       RFA and RDBA
Comprehensive network security      Minimizes data communications
Online network status displays      overhead
Supports all MPE file types         Comprehensive network security
Online user application tracing     Simplifies network management
Dynamic node failure recovery       Optimizes use of the network
------------------------------------------------------------------------
NetBase spooling
Network spooling is a standard NetBase feature. This makes it both
simple and practical to have your programs output to any spooled printer
anywhere in your network. Simple, by providing you with several
transparent methods of defining the printing environment. Practical, by
performing the task with a minimal amount of overhead.
NetBase spooling transports multiple lines of output per network
transaction. Sending the output in this manner drastically reduces
networking overhead, yielding over 15 times the performance across the
LAN.
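The overhead savings from batching come from simple arithmetic: fewer, larger network transactions in place of many small ones. A small Python sketch, with illustrative numbers rather than measured NetBase figures:

```python
def transactions(lines, lines_per_txn):
    """Number of network transactions needed to ship `lines` output
    lines, batching `lines_per_txn` lines into each transaction."""
    return -(-lines // lines_per_txn)   # ceiling division

# One line per transaction vs. a batch of 20 lines per transaction
# for a 6,000-line report:
assert transactions(6000, 1) == 6000
assert transactions(6000, 20) == 300    # 20x fewer network round trips
```

Since each transaction carries fixed per-packet protocol overhead, cutting the transaction count this way is what yields the large performance multiple quoted above.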
Other features include integration with Novell NetWare, bidirectional
transfers to HP-UX, and complete automation of any repetitive tasks.
Features of NetBase spooling: Benefits of NetBase Spooling:
------------------------------------------------------------------------
Dynamic device mapping and          Increases application
direction                           performance
Spools directly to remote           Centralized printing
systems                             Shared print resources
Automates report distribution       Simplified operations
Easy-to-read and flexible           Scalable to print volumes
banners                             Reduces paper and printer cost
Online viewing of spool files       Network and system fault
Automatic spool file archiving      tolerance
Fast spool file search for error
recovery
Supports IBM, HP-UX, and NetWare hosts
------------------------------------------------------------------------
NetBase shadowing
NetBase shadowing replicates data in real time to multiple machines
(more than 12). It maintains redundant copies of files and databases on
the network that can be useful for disaster recovery, increasing
performance, off-loading busy machines, providing online backup, and
providing 24-hour uptime.
By giving users concurrent access to both master and shadow copies of
data, NetBase shadowing lets you off-load busy systems. Batch reports
and inquiry access can be executed on the shadow computer providing true
load balancing with better throughput.
NetBase shadowing was designed to function through all modes of
operation and failure. This includes backups, network failures, and
system failures. If the shadow computer is unavailable, NetBase will
automatically queue all updates on the master computer until the shadow
machine becomes available.
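That queue-and-catch-up behavior can be modeled in a few lines of Python. This is a toy model of the idea, not the NetBase implementation:

```python
class Shadower:
    """Toy model of shadowing: apply updates locally, forward them to
    the shadow when it is reachable, queue them while it is down."""
    def __init__(self):
        self.master, self.shadow, self.queue = [], [], []
        self.shadow_up = True

    def update(self, record):
        self.master.append(record)          # master is always updated
        if self.shadow_up:
            self.shadow.append(record)      # forward in real time
        else:
            self.queue.append(record)       # hold until shadow returns

    def shadow_recovered(self):
        self.shadow_up = True
        self.shadow.extend(self.queue)      # drain the backlog in order
        self.queue.clear()

s = Shadower()
s.update("txn1")
s.shadow_up = False                         # network or system failure
s.update("txn2"); s.update("txn3")          # master keeps working
s.shadow_recovered()
assert s.shadow == s.master == ["txn1", "txn2", "txn3"]
```

The point of the design is that a shadow outage never blocks the master: updates accumulate in order and the shadow converges once it is reachable again.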
Disaster recovery is simplified if multiple copies of data exist on
the network. Should a machine go down, all file access can be
redirected to another computer almost instantly, bringing an otherwise
unavailable application back online in an incredibly short period of
time -- literally in a few minutes.
Features of NetBase Shadowing: Benefits of NetBase Shadowing:
------------------------------------------------------------------------
Real-time replication of data Provides 24-hour data center
across systems uptime
Provides concurrent backup Reduces network traffic
capability Spreads file and database I/O
Uses low overhead transport across nodes
algorithms Quick disaster recovery
Shadows both TurboIMAGE and Offloads batch reporting jobs to
ALLBASE/SQL databases second CPU to reduce resource
Guarantees synchronization contention
between the master and the Easy to configure
shadow system
Supports multiple and partial
shadow copies
Works with all supported HP
network links
User exits increase functionality
and flexibility
------------------------------------------------------------------------
NetBase Statistics
While most performance tools provide a global picture of system health,
NetBase statistics capture data at the file and user level. File access
data, process overhead, and response times are captured to quantify an
application's use of system resources. With this level of information
available, application monitoring, profiling, and tuning are simple. In
fact, network strategies can be modeled in advance with NetBase
Statistics Scenario Generator.
Standard, easy-to-read reports are provided with NetBase Statistics.
All data is kept in a documented log file format, so users can produce
their own reports with the 4GL or language of their choice. Statistics
can be enabled and disabled at any time.
Features of NetBase Statistics:     Benefits of NetBase Statistics:
------------------------------------------------------------------------
Provides statistics of file access  Points out poor locking strategies
by program and by user              Simplifies performance analysis
Profiles application activity by    Uncovers application inefficiencies
intrinsic, program, and user        Reveals all network file access
Predicts performance of network     activity
changes                             Can be turned on or off at any
Generates reports at varying        time
levels of detail                    Simple to use
Advanced modeling capability
for "what if" scenarios
Very low overhead
------------------------------------------------------------------------
NetBase Auto RPM
AutoRPM (Remote Process Management) gives users transparent access to
programs located on remote machines. With one command, users gain
instant access to virtually any software on the network. AutoRPM
automatically transports the user to the appropriate computer, reducing
the need for duplicating data and applications.
Features of NetBase AutoRPM: Benefits of NetBase AutoRPM:
------------------------------------------------------------------------
Provides transparent program Simplifies network application
access across the network integration
Instantly redirects users to Easy to manage
remote applications Reduces software and data
Handles all remote session redundancy (which also reduces
management software costs)
Transports file equations and Transparent to users
other session environment
information
No changes to application
------------------------------------------------------------------------
RISC architecture overview and competitive comparison
PA-RISC delivers 64-bit functionality
In 1986, HP became the first company to ship a RISC processor
architected for 64-bit virtual addressing, a feature which has been
included in all PA-RISC processors. Today's PA-RISC chips include many
64-bit features such as 64-bit registers, 64-bit loads and stores, and
64-bit addressing, which are similar to those used by DEC Alpha and the
MIPS R4000, among others.
PA-RISC uses a cost-effective approach to deliver 64-bit functionality
to the customer. In some parts of its chips, 32-bit registers and data
paths are used to reduce the cost of the processor and deliver better
price/performance to the end user. These areas have been carefully
selected so that they impact few (if any) customers, while the cost
savings are shared by all. As customer needs change, HP will continue
to optimize its products to deliver needed functionality at the best
possible price.
Keep in mind that 64-bit features are not a measure of performance.
It is a myth that 64-bit chips are "twice as good" as 32-bit chips. For
a few customers with special needs, 64-bit features may enable them to
do their work better or faster. For the vast majority of customers,
these 64-bit features will never be used.
64-bit addressing
There are two types of addresses, virtual and physical. Virtual
addressing determines the maximum amount of data that can be used by the
system at any given time, and physical addressing determines the maximum
amount of memory that can be installed in the system. We will deal with
these issues separately.
When a user needs to access data, whether it is a file, a set of
numbers in a spreadsheet, or the result of a single calculation, virtual
addressing is used. For typical applications, a single user may need
several megabytes (MB) of data, although a large application may require
as much as 100 MB. Since even a 32-bit processor has up to 4 GB (4,096
MB) of virtual address space, it is very difficult for a single user to
exceed the total virtual space available. On a multiuser system,
however, all data requirements of all active users must fit into the
virtual address space. Thus, a system with 50-100 users may exceed the
address limits of a 32-bit processor, depending on the needs of each
user. If this happens, no new users can access the system.
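The arithmetic behind this limit is straightforward; for example, with the illustrative figures above:

```python
MB = 2**20
virtual_space_32bit = 4096 * MB            # 4 GB of 32-bit virtual space

# 100 active users, each needing a large 100 MB application:
users, per_user = 100, 100 * MB
assert users * per_user > virtual_space_32bit   # 10,000 MB > 4,096 MB
```

At that point the shared 32-bit address space is exhausted even though no single user comes anywhere near the 4 GB limit on their own.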
PA-RISC solves this problem by extending the virtual address space to
64-bits using segments. With this method, each user (or process) is
assigned a "segment", or section of memory, of up to 4 GB in size.
Since the system has literally billions of these segments available,
even very large systems will not run out of segments. And the segments
are large enough that very few users will need more than one. IBM's
POWER architecture uses a similar method with smaller 256 MB segments.
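The segment scheme can be illustrated with a little address arithmetic. This is a simplified model for illustration, not the actual PA-RISC address translation:

```python
# Simplified model: a 64-bit virtual address formed from a 32-bit
# segment number and a 32-bit offset within the segment.
SEG_BITS = 32

def virtual_address(segment, offset):
    assert 0 <= offset < 2**SEG_BITS       # each segment holds up to 4 GB
    return (segment << SEG_BITS) | offset

# Two processes can use the same 32-bit offset, yet their data lands
# at distinct 64-bit virtual addresses because the segments differ:
a = virtual_address(1, 0x1000)
b = virtual_address(2, 0x1000)
assert a != b
assert 2**SEG_BITS == 4 * 2**30            # 4 GB per segment
```

Within its segment, a process still works with cheap 32-bit offsets; the full 64-bit address only comes into play when the hardware combines the segment number with the offset.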
Both DEC's Alpha and the R4000 solve the 32-bit address limit by using
a "flat" (unsegmented) 64-bit address space. Using this method, users
can be allocated more than 4 GB of space if needed. Today, very few
applications need this much space. Even in situations where more than 4
GB is needed, PA-RISC will assign multiple segments to that process,
although some performance overhead is required to switch between the
segments.
PA-RISC does not use a flat 64-bit virtual address space because the
segmented method is less expensive. Using segments reduces the size of
the register file and the TLB, allowing PA-RISC chips to be smaller (and
thus less expensive) than comparable chips using a flat address space.
Allowing user programs to use 32-bit pointers also means that these
pointers take up less space (than 64-bit pointers) when stored in
memory, improving the efficiency of the data cache. Although the flat
method offers some performance improvement to a very small number of
customers with extremely large applications, HP does not want the vast
majority of customers to have to pay for a feature that only a few will
use. When more customers begin to need larger segments, HP will deliver
a flat 64-bit processor, since the changes required to do so are fairly
minor (as shown by MIPS' evolution to the R4000).
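The cache-efficiency point in the paragraph above is simple arithmetic; for example, with an illustrative array of one million pointers:

```python
# Illustrative footprint arithmetic: the same array of one million
# pointers, stored as 32-bit vs. 64-bit values.
n = 1_000_000
footprint_32 = n * 4           # bytes with 32-bit pointers
footprint_64 = n * 8           # bytes with 64-bit pointers
assert footprint_64 == 2 * footprint_32   # twice the data-cache pressure
```

Halving the space that pointer-heavy data occupies means more of it fits in the data cache at once, which is the efficiency gain the segmented design preserves.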
There is also the issue of physical addressing. The PA-RISC
architecture supports up to a maximum of 4 GB of physical memory. Most
systems (including competitors') are limited to a much smaller number
due to hardware limitations. In particular, at today's prices, the cost
of the DRAM alone for 4 GB of memory would be over $500,000, which would
not be feasible except for a large mainframe or supercomputer. With the
price of memory falling by about 40 percent per year, 4 GB of memory
will be feasible for high-end mini-computers by the middle of the
decade, and possibly for high-end workstations by the end of the decade.
HP will expand the physical address range of PA-RISC when our customers
require this feature.
As a final note, although the PA-RISC, MIPS, and Alpha architectures
all support a 64-bit address range, it is common for the actual chips to
use a "subset" of the full range. For example, the first Alpha CPU only
implements 43 bits out of 64. HP is the only company to ever ship a
full 64-bit virtual address implementation, which began shipping in
1989. Furthermore, in order to actually use more than 32-bit
addressing, the MIPS and DEC operating systems must be modified. Today,
neither company is shipping a 64-bit operating system, but HP operating
systems today use 64-bit segmented addressing. The final section shows
the address ranges specified and actually implemented by the various
vendors.
Other 64-bit issues
There are several other areas of the chip that can be specified as 64-
bits. One critical area is the floating-point unit (used for scientific
calculations). There is a growing trend in technical applications
toward highly accurate double-precision (64-bit) calculations. PA-RISC,
like Alpha and others, uses 64-bit floating-point registers and can
perform double-precision operations very quickly. These registers can
store very large numbers (as big as 10 followed by over 300 zeros) and
can perform calculations with an accuracy of better than one part in
nine quadrillion.
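These range and precision figures match the IEEE double-precision format that 64-bit floating-point registers hold, and can be checked with a quick Python sketch:

```python
import sys

# IEEE double precision, as held in 64-bit floating-point registers:
assert sys.float_info.max > 1e308        # "10 followed by over 300 zeros"
assert 2**53 == 9_007_199_254_740_992    # about nine quadrillion

# Past one part in 2**53, neighboring integer values can no longer
# be distinguished in the double format:
assert float(2**53) == float(2**53 + 1)
```

So "one part in nine quadrillion" is exactly the 2**53 resolution of the 53-bit double-precision significand.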
On the integer side, PA-RISC implements 32-bit registers while Alpha
and the R4000 use 64-bit registers. These 32-bit registers can store
values as large as 4,200,000,000. HP has found that customers who use
even larger numbers or even more precise calculations use the floating-
point unit. Thus, there is no need to add the extra cost of larger
integer registers. The PA7100 chip can perform 64-bit floating point
calculations (add, subtract and multiply) in just two clock cycles, so
there is little penalty, and often a significant speedup, for doing
precise calculations on the floating-point side.
A final issue is moving data into and out of the CPU. Since
processors can spend 20 percent--30 percent of their time moving data,
it is an advantage to be able to move lots of data quickly. The PA7100
can load 64-bits per clock cycle from either the instruction cache or
the data cache. (This is also true for store operations.) Thus, Alpha
and the R4000 have no advantage in this area.
PA7100 versus the competition
In general, the new PA7100 is very similar to the new DEC Alpha and the
MIPS R4000. The only differences are in the segment size and the
integer register file; in each case, the PA7100 implements only 32 bits
to reduce costs, since almost no customers will use the extra bits, as
explained above. All other PA-RISC processors implement the same 64-bit
features as the PA7100 except for instruction loads, and some implement
the full 64-bit virtual space, while the DEC Alpha and R4000 are not the
same as previous CPUs from those vendors.
The IBM POWER processors are very similar to the PA-RISC processors,
except that the segments are slightly smaller. The older MIPS (R3000
and previous) and SPARC (SPARC 2 and previous) processors are basically
32-bit chips. Sun has not released full details on the forthcoming
SuperSPARC chip, but it is believed to be similar to SPARC 2 except for
a new 64-bit floating-point and instruction loads.
------------------------------------------------------------------------
HP DEC
------------------------------------------------------------------------
Feature PA7100 Alpha
Instruction size 32 32
Architecture:
Max virtual address 64 64
Max segment size 32 62
H/W implementation:
Max. virtual address 48 43
Max. segment size 32 39
Max. physical addr 32 34
Operating System:
Max virtual address 64 N/A
Integer register size 32 64
Floating-point regs 64 64
Instruction load width 64 64
Data load/store width 64 64
------------------------------------------------------------------------
The table above shows a comparison of HP and DEC RISC chips. The left
column shows the various features that were discussed in this paper,
while the numeric entries show the number of bits supported for each of
the features. For example, the first line shows that all of the
processors use 32-bit instructions. "Max. virtual address" is the total
size of the virtual address space implemented. The "Register size"
lines show the width of the integer and floating-point registers. The
final section shows the number of bits of either instruction or data
that the processor can load per clock cycle.
Conclusion
The PA7100 (and other PA-RISC chips) use a cost-effective approach to
deliver 64-bit functionality to the end user. In this way, PA-RISC
customers do not pay for 64-bit features that they do not need. In the
future, the PA-RISC architecture will continue to evolve to meet the
needs of its customers. The 64-bit features are not a measure of
performance, and most applications do not require or use 64-bit
features. The best measure of performance is still a benchmark, or
better yet, running the target application itself.
Note: The information contained here has been verified where possible
with the manufacturer's published literature, but Hewlett-Packard
assumes no responsibility for any errors. Trademark names are used in
an editorial fashion only with no intent to infringe.
HP 9000 versus DEC DECsystem/VAX Cost of Ownership
------------------------------------------------------------------------
16-User configuration
------------------------------------------------------------------------
List List
HP 9000 Model F10 $11,250 DECsystem 5000 $15,715
16 MB RAM, 566 MB Disk, incl Model 240 incl
2.0 GB DDS, LAN, Console, incl 16 MB RAM, Tape drive incl
Multifunction I/O card, incl LAN incl
8-user OS incl 665 MB disk 6,700
16-user license 3,150 16-user OS 1,470
________ ________
Total H/W and S/W cost $14,400 $23,885
3-Year support $8,030 $11,026
3-Year C.O.O. $22,430 $34,540
------------------------------------------------------------------------
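The three-year cost-of-ownership figures in these tables follow a simple
formula: total hardware and software list price plus three years of
support. A sketch using the HP 9000 Model F10 column above (the variable
names are my own):

```python
# HP 9000 Model F10, 16-user configuration (list prices from the table)
base_system = 11_250     # F10 with RAM, disks, DDS, LAN, console, 8-user OS
user_16_license = 3_150  # upgrade to 16-user license
support_3yr = 8_030      # 3-year support

hw_sw_total = base_system + user_16_license
coo_3yr = hw_sw_total + support_3yr

print(hw_sw_total, coo_3yr)  # 14400 22430, matching the table
```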
------------------------------------------------------------------------
128-User configuration
------------------------------------------------------------------------
List List Support
HP 9000 Model G-30 $20,000 VAX 6000, Model 610 $202,336 $21,240
96 MB RAM, 2.0 GB disk incl System with OpenVMS incl
2.0 GB DDS, Console, incl Base OS Lic., LAN incl
566 MB disk, incl 128 MB RAM incl
32 MB RAM, LAN incl 2.0 GB disk 7,952 2,280
64 MB RAM addition 6,400 128 user lic. 33,792 17,040
2.0 GB disk replacement 4,700 Tape drive 7,084 2,736
128-user license 17,875 Console 655 96
________ _________ ________
Total H/W and S/W cost $48,975 $251,829
3-Year support $17,047 $43,392
3-Year C.O.O. $66,022 $295,221
------------------------------------------------------------------------
Or:
------------------------------------------------------------------------
List
DECsystem 5900, 64 MB, $59,496
RAM, 1.38 GB disk, LAN, incl
4-user lic., media/doc. incl
128-user OS 2,746
Tape drive 2,040
_________
Total H/W and S/W cost $74,362
3-Year support $26,712
3-Year C.O.O. $101,074
------------------------------------------------------------------------
------------------------------------------------------------------------
256-User configuration
------------------------------------------------------------------------
List List Support
HP 9000 Model H-50 $72,000 VAX 6000, $361,768 $38,160
Model 620 Sys.
64 MB RAM, 1 GB disk, incl with OpenVMS Base Lic. incl
Console, LAN, tape drive, incl LAN incl
Media/doc. incl 128 MB RAM incl
192 MB RAM 19,200 4 GB disks 21,520 4,560
4 GB disk 13,800 1 GB disk 5,680 1,560
256-user lic. 22,075 256-user lic. 33,792 5,400
Tape drive 8,868 2,736
Console 665 96
Media and doc. incl 6,960
_________ _________ _________
Total H/W and S/W cost$127,075 $432,293
3-Year support $38,558 $55,368
3-Year C.O.O. $165,633 $487,661
------------------------------------------------------------------------
HP 3000 versus DEC VAX Cost of Ownership Comparisons
------------------------------------------------------------------------
128-User configuration
------------------------------------------------------------------------
List Support List Support
HP 3000 $106,500 $14,832 VAX 6000 $202,336 $21,240
Model 947LX Model 610 Sys.
SPU (FOS only) incl with Base OS incl
Media & doc. incl 408 128 MB RAM incl
Standard chassis incl 3,688 LAN incl
64 MB memory incl 2.0 GB disk 7,952 2,280
1.0 GB disk drive incl 691 128-user OS/media/doc 33,792 17,040
Console incl 224 Tape drive 7,084 2,736
2.0 GB DDS DAT 1,440 Console 665 96
tape drive incl
32 MB add. memory 4,000
________ ________ _________ ________
Total H/W and S/W cost $110,500 $251,829
3-Year support $21,283 $43,392
3-Year C.O.O. $131,783 $295,221
------------------------------------------------------------------------
------------------------------------------------------------------------
256-User configuration
------------------------------------------------------------------------
List Support List Support
HP 3000 987SX $227,800 $21,000 VAX 6000 Model $361,768 $38,160
620 Sys.
SPU (FOS only) incl w/OpenVMS Base OS Lic. incl
Standard chassis incl 36,040 LAN incl
Media & doc. incl 408 128 MB RAM incl
64 MB memory incl 4 GB disks 21,520 4,560
128 MB memory incl 1 GB disk 5,680 1,560
2 GB disk 6,900 691 256-user OS 33,792 5,400
1 GB disk incl 576 Tape drive 8,868 2,736
2 GB disk 6,900 691 Console 665 96
2.0 GB DDS DAT incl 1,440 Media & doc. incl 6,960
tape drive
Console incl 224
__________ ________ __________ ________
Total H/W and S/W cost $260,800 $432,293
3-Year support $61,070 $55,368
3-Year C.O.O. $321,870 $487,661
------------------------------------------------------------------------
Note: For additional cost of ownership comparisons between the HP 3000
and DEC VAX, please refer to the CSY Hotline article "Compare".
From Selling Against the Competition Competitive Binder, 5091-6465E,
9301
Associated files: DECAPX01.gal, DECAPX02.gal, DECAPX03.gal,
DECAPX04.gal, DECAPX01.hpg, DECAPX02.hpg, DECAPX03.hpg, DECAPX04.hpg,
DECAPX07.hpg, DECAPX07.plt, DECAPX08.hpg, DECAPX08.gal, DECAPX09.hpg,
DECAPX09.gal, DECAPX10.hpg, DECAPX10.gal, DECAPX05.hpg, DECAPX05.gal,
DECAPX06.hpg, DECAPX06.gal, decapx.doc